📚 node [[rnn|rnn]]
⥅ related node [[rnns]]
⥅ related node [[rnns recurrent neural networks]]
⥅ related node [[rnn]]
⥅ related node [[rchvngrlsgwrnnt]]
⥅ node [[rnn]] pulled by Agora
⥅ node [[rnns]] pulled by Agora

I read "The Unreasonable Effectiveness of Recurrent Neural Networks" by Andrej Karpathy this weekend. I saw it recommended/linked several times so I added it to my ML to-do/to-read list, and I've just gotten to it. Some highlights I copy/pasted into Keep follow:

As you might expect, the sequence regime of operation is much more powerful compared to fixed networks that are doomed from the get-go by a fixed number of computational steps, and hence also much more appealing for those of us who aspire to build more intelligent systems.

Then:

Moreover, as we’ll see in a bit, RNNs combine the input vector with their state vector with a fixed (but learned) function to produce a new state vector. This can in programming terms be interpreted as running a fixed program with certain inputs and some internal variables. Viewed this way, RNNs essentially describe programs. In fact, it is known that RNNs are Turing-Complete in the sense that they can simulate arbitrary programs (with proper weights). If training vanilla neural nets is optimization over functions, training recurrent nets is optimization over programs.

Interesting. Karpathy does add a caveat about not reading too much into this, and I can see how this "universal program approximation" property of RNNs also ties, more indirectly, into the usual discussions of "Turing completeness" -- people sometimes get hung up on Turing completeness when in many cases it just isn't very relevant; it's a pretty low bar for a programming language or platform in day-to-day use and it doesn't mean much in practice. Still, the fact that RNNs trained character-by-character are able to pick up greater and greater levels of structure seems very promising. I found the visualizations of per-neuron activity very illuminating: Karpathy finds a neuron that "learns" to be "on" when inside a quotation, and another that gets activated as the text gets closer to where a newline would usually appear. This is all structure that a programmer would likely think about and code explicitly if they had to hand-write a text generator, and the network is just learning it on its own from data.
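
To make the "fixed (but learned) function" concrete, here is a minimal sketch of a vanilla RNN step in NumPy, in the spirit of the small `step` function Karpathy shows in the article. The weight names (`W_xh`, `W_hh`, `W_hy`), the sizes, and the initialization are illustrative assumptions on my part, not a trained model from the post:

```python
import numpy as np

class VanillaRNN:
    """Minimal vanilla RNN cell: the new state is a fixed (but learned)
    function of the current input vector and the previous state vector."""

    def __init__(self, input_size, hidden_size, output_size):
        # Small random weights; sizes and the 0.01 scale are arbitrary, for illustration.
        self.W_xh = np.random.randn(hidden_size, input_size) * 0.01   # input -> hidden
        self.W_hh = np.random.randn(hidden_size, hidden_size) * 0.01  # hidden -> hidden
        self.W_hy = np.random.randn(output_size, hidden_size) * 0.01  # hidden -> output
        self.h = np.zeros(hidden_size)                                # hidden state

    def step(self, x):
        # Combine the input vector with the state vector to produce the new state.
        self.h = np.tanh(self.W_xh @ x + self.W_hh @ self.h)
        # Read an output (e.g. scores over the next character) off the new state.
        return self.W_hy @ self.h
```

Calling `step` once per character, with `x` a one-hot vector over the character vocabulary, is essentially the character-level setup whose per-neuron activity gets visualized: the quote-detection neuron and the "newline is coming up" neuron correspond to components of the hidden state.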

The article is from 2015, and some people seem to think it's a bit dated by now -- not in its basic approach necessarily, but rather because convolutions have taken over from RNNs/LSTMs in many domains. gwern left this comment on Hacker News (I swear I'm not stalking him, he just keeps popping up in the stuff I read):

If this were written today, Karpathy would have to call it "The Unreasonable Effectiveness of Convolutions". Since 2015, convolutions, causal or dilated convolutions, and especially convolutions with attention like the Transformer, have made remarkable inroads onto RNN territory and are now SOTA for most (all?) sequence-related tasks. Apparently RNNs just don't make very good use of that recurrency & hidden memory, and the non-locality & easy optimization of convolutions allow for much better performance through faster training & bigger models. Who knew?

My current plan is to experiment a bit with RNNs/LSTM and then move on to convolutions.

⥅ node [[rnns-recurrent-neural-networks]] pulled by Agora

Recurrent neural networks


Part of the page on [[Neural Networks]]

Recurrent neural networks, or RNNs, are recurrent because they perform the same task for every element of a sequence, with the output from previous steps feeding into the input of subsequent steps.

(Figure: a graphical representation of a recurrent neural network.)

In a general neural network, an input is processed through a number of layers and an output is produced under the assumption that two successive inputs are independent of each other, but that may not hold true in certain scenarios. For example, when we need to consider the context in which a word has been spoken, dependence on previous observations has to be taken into account to produce the output.

RNNs can make use of information in long sequences, with each step of the unrolled network processing the observation at a particular point in time.
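
As a rough sketch of that unrolling (toy sizes, random stand-in data, no training), note that the same weights are applied at every time step, and the hidden state is what carries information from earlier observations forward:

```python
import numpy as np

input_size, hidden_size, seq_len = 8, 16, 5   # toy sizes, purely illustrative

# One set of weights, reused at every time step -- this reuse is the "recurrent" part.
W_xh = np.random.randn(hidden_size, input_size) * 0.01
W_hh = np.random.randn(hidden_size, hidden_size) * 0.01

xs = np.random.randn(seq_len, input_size)     # stand-in sequence of observations
h = np.zeros(hidden_size)                     # initial hidden state

for t in range(seq_len):
    # The same update runs for every element of the sequence; h carries
    # information from all previous observations into step t.
    h = np.tanh(W_xh @ xs[t] + W_hh @ h)
```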

📖 stoas
⥱ context